Shanks Transformation

In numerical analysis, the Shanks transformation is a non-linear series acceleration method used to increase the rate of convergence of a sequence. The method is named after Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941 (Weniger 2003).


Formulation

For a sequence \left\{a_m\right\}_{m\in\mathbb{N}}, the series

:A = \sum_{m=0}^\infty a_m

is to be determined. First, the partial sum A_n is defined as

:A_n = \sum_{m=0}^n a_m

and forms a new sequence \left\{A_n\right\}_{n\in\mathbb{N}}. Provided the series converges, A_n will also approach the limit A as n\to\infty. The Shanks transformation S(A_n) of the sequence A_n is the new sequence defined by (Bender & Orszag 1999, pp. 368–375; Van Dyke 1975, pp. 202–205)

:S(A_n) = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} - 2A_n + A_{n-1}} = A_{n+1} - \frac{(A_{n+1} - A_n)^2}{A_{n+1} - 2A_n + A_{n-1}},

where this sequence S(A_n) often converges more rapidly than the sequence A_n. Further speed-up may be obtained by repeated use of the Shanks transformation, by computing S^2(A_n) = S(S(A_n)), S^3(A_n) = S(S(S(A_n))), etc.

Note that the non-linear transformation used in the Shanks transformation is essentially the same as that used in Aitken's delta-squared process, so that, as with Aitken's method, the right-most expression in the definition of S(A_n), i.e.

:S(A_n) = A_{n+1} - \frac{(A_{n+1} - A_n)^2}{A_{n+1} - 2A_n + A_{n-1}},

is more numerically stable than the expression to its left,

:S(A_n) = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} - 2A_n + A_{n-1}}.

Both Aitken's method and the Shanks transformation operate on a sequence, but the sequence the Shanks transformation operates on is usually thought of as a sequence of partial sums, although any sequence may be viewed as a sequence of partial sums.
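The transformation above can be sketched in a few lines of code. The following is a minimal illustration (not part of the original article), using the numerically stable right-hand form:

```python
def shanks(A):
    """One Shanks transformation of a list of partial sums A.

    Uses the numerically stable form
        S(A_n) = A_{n+1} - (A_{n+1} - A_n)**2 / (A_{n+1} - 2 A_n + A_{n-1}).
    The result has two fewer entries than A (n runs from 1 to len(A) - 2).
    """
    S = []
    for n in range(1, len(A) - 1):
        num = (A[n + 1] - A[n]) ** 2
        den = A[n + 1] - 2 * A[n] + A[n - 1]  # assumed nonzero (see Motivation)
        S.append(A[n + 1] - num / den)
    return S
```

Repeated application is simply `shanks(shanks(A))`, and so on.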


Example

As an example, consider the slowly convergent series

:4 \sum_{k=0}^\infty (-1)^k \frac{1}{2k+1} = 4 \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right),

which has the exact sum ''π'' ≈ 3.14159265. The partial sum A_6 has only one-digit accuracy, while six-figure accuracy requires summing about 400,000 terms.

In the table below, the partial sums A_n, the Shanks transformation S(A_n), as well as the repeated Shanks transformations S^2(A_n) and S^3(A_n) are given for n up to 12. The figure to the right shows the absolute error for the partial sums and the Shanks transformation results, clearly showing the improved accuracy and convergence rate.

The Shanks transformation S(A_1) already has two-digit accuracy, while the original partial sums only establish the same accuracy at A_{24}. Remarkably, S^3(A_3) has six-digit accuracy, obtained from repeated Shanks transformations applied to the first seven terms A_0, \ldots, A_6. As said before, A_n only attains six-digit accuracy after summing about 400,000 terms.
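The repeated transformations for this series can be reproduced with a short script (an illustrative sketch, not part of the original article), where `shanks` is the ordinary transformation applied along the sequence:

```python
def shanks(A):
    """One Shanks transformation (numerically stable form)."""
    return [A[n + 1] - (A[n + 1] - A[n]) ** 2
            / (A[n + 1] - 2 * A[n] + A[n - 1])
            for n in range(1, len(A) - 1)]

# Partial sums A_0 .. A_6 of the Leibniz series for pi.
A = [4 * sum((-1) ** k / (2 * k + 1) for k in range(n + 1)) for n in range(7)]

S1 = shanks(A)    # 5 values: S(A_1)   .. S(A_5)
S2 = shanks(S1)   # 3 values: S^2(A_2) .. S^2(A_4)
S3 = shanks(S2)   # 1 value:  S^3(A_3), accurate to about six digits
```

Each pass consumes one value at each end of the sequence, which is why seven partial sums yield exactly one third-level transform.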


Motivation

The Shanks transformation is motivated by the observation that, for larger n, the partial sum A_n quite often behaves approximately as

:A_n \approx A + \alpha q^n,

with |q| < 1, so that the sequence converges to the series result A for n \to \infty. So for n-1, n and n+1 the respective partial sums are:

:A_{n-1} = A + \alpha q^{n-1}, \qquad A_n = A + \alpha q^n \qquad \text{and} \qquad A_{n+1} = A + \alpha q^{n+1}.

These three equations contain three unknowns: A, \alpha and q. Eliminating \alpha and q (for instance by noting that A_{n+1} - A_n = \alpha q^n (q - 1) and A_n - A_{n-1} = \alpha q^{n-1} (q - 1), so that their ratio equals q) and solving for A gives

:A = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} - 2A_n + A_{n-1}}.

In the (exceptional) case that the denominator is equal to zero, A_n = A for all n.
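A quick numerical check (an illustrative sketch with arbitrarily chosen values of A, \alpha and q, not from the original text) confirms that the transformation is exact whenever the model holds:

```python
# Model: A_n = A + alpha * q**n with |q| < 1 (values chosen for illustration).
A_lim, alpha, q = 2.0, 0.7, 0.5
A = [A_lim + alpha * q ** n for n in range(4)]

n = 2  # any interior index works
S = (A[n + 1] * A[n - 1] - A[n] ** 2) / (A[n + 1] - 2 * A[n] + A[n - 1])
# S recovers the limit A_lim (up to rounding) from only three terms,
# even though the raw partial sums are still far from it.
```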


Generalized Shanks transformation

The generalized ''k''th-order Shanks transformation is given as the ratio of the determinants (Bender & Orszag 1999, pp. 389–392):

:S_k(A_n) = \frac{\begin{vmatrix} A_{n-k} & \cdots & A_{n-1} & A_n \\ \Delta A_{n-k} & \cdots & \Delta A_{n-1} & \Delta A_n \\ \vdots & & \vdots & \vdots \\ \Delta A_{n-1} & \cdots & \Delta A_{n+k-2} & \Delta A_{n+k-1} \end{vmatrix}}{\begin{vmatrix} 1 & \cdots & 1 & 1 \\ \Delta A_{n-k} & \cdots & \Delta A_{n-1} & \Delta A_n \\ \vdots & & \vdots & \vdots \\ \Delta A_{n-1} & \cdots & \Delta A_{n+k-2} & \Delta A_{n+k-1} \end{vmatrix}},

with \Delta A_p = A_{p+1} - A_p. It is the solution of a model for the convergence behaviour of the partial sums A_n with k distinct transients:

:A_n = A + \sum_{p=1}^k \alpha_p q_p^n.

This model for the convergence behaviour contains 2k+1 unknowns: A, \alpha_1, \ldots, \alpha_k and q_1, \ldots, q_k. By evaluating the above equation at the elements A_{n-k}, A_{n-k+1}, \ldots, A_{n+k} and solving for A, the above expression for the ''k''th-order Shanks transformation is obtained. The first-order generalized Shanks transformation is equal to the ordinary Shanks transformation: S_1(A_n) = S(A_n).

The generalized Shanks transformation is closely related to Padé approximants and Padé tables.

Note: evaluating these determinants directly requires many arithmetic operations; however, Peter Wynn discovered a recursive evaluation procedure, called the epsilon-algorithm, which avoids calculating the determinants (Wynn 1962).
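Wynn's recursion, started from \epsilon_{-1}^{(n)} = 0 and \epsilon_0^{(n)} = A_n, builds the table column by column via \epsilon_{k+1}^{(n)} = \epsilon_{k-1}^{(n+1)} + 1/(\epsilon_k^{(n+1)} - \epsilon_k^{(n)}). A minimal sketch (an illustrative implementation, not Wynn's original code):

```python
def wynn_epsilon(A):
    """Build the columns of Wynn's epsilon table from partial sums A.

    eps_{-1}^{(n)} = 0,  eps_0^{(n)} = A_n,
    eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1 / (eps_k^{(n+1)} - eps_k^{(n)}).

    The even columns cols[2], cols[4], ... hold the generalized Shanks
    transforms S_1, S_2, ...; the odd columns are only intermediate values.
    Assumes no two consecutive entries in a column coincide (no zero division).
    """
    prev = [0.0] * len(A)  # eps_{-1} column
    cur = list(A)          # eps_0 column
    cols = [list(cur)]
    while len(cur) > 1:
        nxt = [prev[n + 1] + 1.0 / (cur[n + 1] - cur[n])
               for n in range(len(cur) - 1)]
        prev, cur = cur, nxt
        cols.append(list(cur))
    return cols
```

For the Leibniz partial sums of the example above, `cols[2][0]` reproduces the ordinary Shanks value S(A_1) = 19/6 without any determinant being formed; each column costs only O(N) operations.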


See also

* Aitken's delta-squared process
* Anderson acceleration
* Rate of convergence
* Richardson extrapolation
* Sequence transformation




References

* Wynn, P. (1962), "Acceleration techniques for iterated vector and matrix problems", ''Math. Comp.'' 16: 301–322.